Tech firms claim nuclear will solve AI's power needs – they're wrong

New Scientist

Silicon Valley wants to use nuclear power to support the energy-hungry data centres that help train and deploy its artificial intelligence models. But realistic timelines show that any US nuclear renaissance will have at best a limited impact during a period of fast-rising electricity demand. Global electricity usage from data centres is already on track to double by 2026. In the US, data centres represent the fastest-growing source of energy demand at a time when the country's…


Why Technology Alone Can't Solve AI's Bias Problem - HBS Working Knowledge

#artificialintelligence

In a cluttered online world, few can resist the convenience of an automated ranking when deciding what movie to watch on Netflix or which seafood restaurant looks promising in a Google search. But when it comes to finding a job candidate or someone to do a basic household task, there's often a human toll to letting algorithms do the work. Searches on popular recruiting sites might seem like a neutral way to find prospective candidates, but their underlying technology can reinforce biases by excluding underrepresented groups, including women. For instance, research shows that women receive fewer employment reviews on the popular online freelancing site TaskRabbit compared to men with the same experience--and this lack of reviews can lower the rankings of women in talent search algorithms. "Maybe there is a bias from people who have been traditionally hiring men," explains Himabindu Lakkaraju, an assistant professor at Harvard Business School.


How to solve AI's "common sense" problem

#artificialintelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. In recent years, deep learning has taken great strides in some of the most challenging areas of artificial intelligence, including computer vision, speech recognition, and natural language processing. However, some problems remain unsolved. Deep learning systems are poor at handling novel situations, they require enormous amounts of data to train, and they sometimes make weird mistakes that confuse even their creators. Some scientists believe these problems will be solved by creating larger and larger neural networks and training them on bigger and bigger datasets.


How to solve AI's inequality problem

MIT Technology Review

Erik Brynjolfsson's 2014 book, coauthored with Andrew McAfee, is called The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. But he says the thinking of AI researchers has been too limited. "I talk to many researchers, and they say: 'Our job is to make a machine that is like a human.' It's a clear vision," he says. But, he adds, "it's also kind of a lazy, low bar."


What Global Problems Can AI Solve

#artificialintelligence

Artificial Intelligence has many benefits, but one of the biggest is faster technological advancement. Artificial intelligence is now widely used in research, which means it can quickly learn to find answers to many of the questions the world is exploring. This frees researchers to devise new parameters and objectives. As Artificial Intelligence keeps developing, it raises the question: will AI or robotics one day replace us in the workplace? Or will AI replace developers?


How To Solve AI's Bias Problem, Create Emotional AIs, And Democratize AI With Synthetic Data

#artificialintelligence

AI has the potential to change the world in many amazing ways. But like every revolution, it requires fuel. It's long been said that "data is the oil of the information age," and that's certainly true in many ways. But while data is a less finite resource than actual oil, it does come with some challenges. People are (rightly) protective of their personal data, and there are compliance and regulatory responsibilities that must be upheld if we're using that personal data (often the most valuable kind of data) to power AI and generate predictions.


Researchers were about to solve AI's black box problem, then the lawyers got involved

#artificialintelligence

AI has a "black box" problem. We cram data in one side of a machine learning system and we get results out the other, but we're often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with "explainable algorithms" and "transparent AI" trending over the past few years. Black box AI isn't as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs and you only have a couple of hours to crack Kentucky Fried Chicken's secret recipe.


How to solve AI's reproducibility crisis

#artificialintelligence

Reproducibility is often trampled underfoot in AI's rush to results. And the movement to agile methodologies may only exacerbate AI's reproducibility crisis. Without reproducibility, you can't really know what your AI system is doing or will do, and that's a huge risk when you use AI for any critical work, from diagnosing medical conditions to driving trucks to screening for security threats to managing just-in-time production flows. Data scientists' natural inclination is to skimp on documentation in the interest of speed when developing, training, and iterating machine learning, deep learning, and other AI models. But reproducibility depends on knowing the sequence of steps that produced a specific data-driven AI model, process, or decision.
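The point about knowing "the sequence of steps that produced a specific data-driven AI model" can be made concrete. The sketch below is illustrative only, not from the article: `train_with_provenance` and its toy "model" are hypothetical names, and the "training" is just a seeded random sample. What it shows is the practice the passage describes, pinning random seeds and recording a run's inputs and steps so the same run can be repeated exactly.

```python
import hashlib
import json
import random

def train_with_provenance(data, seed=42):
    """Toy 'training' run that records everything needed to reproduce it.

    The model here is just a seeded random sample of the data; the point
    is the provenance record, not the learning algorithm.
    """
    random.seed(seed)  # pin the RNG so the run repeats exactly
    model = sorted(random.sample(data, k=3))
    record = {
        "seed": seed,
        # Fingerprint the input data so a later run can verify it matches.
        "data_fingerprint": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        # The documented sequence of steps behind this result.
        "steps": ["load", "sample(k=3)", "sort"],
        "result": model,
    }
    return model, record

data = list(range(100))
model_a, record_a = train_with_provenance(data, seed=7)
model_b, record_b = train_with_provenance(data, seed=7)
assert model_a == model_b  # same seed + same data -> identical result
```

Real pipelines would extend the record with library versions, hyperparameters, and dataset snapshots, but the principle is the same: if the seed, data fingerprint, and step sequence are logged, the run can be replayed and audited rather than taken on faith.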


Comment: 'We can't leave Silicon Valley to solve AI's ethical issues'

#artificialintelligence

So, hands up who was woken up by Alexa this morning? Or now has Google Home finding their favourite radio station for them? Or had fun over the holidays trying to get Siri to tell them a joke? Artificial intelligence is now more accessible and becoming mainstream. The rapid development and evolution of AI technologies, while unleashing opportunities for business and communities across the world, have prompted a number of important overarching questions that go beyond the walls of academia and hi-tech research centres in Silicon Valley. Governments, business and the public alike are demanding more accountability in the way AI technologies are used, and are trying to find a solution to the legal and ethical issues that will arise from the growing integration of AI in people's daily lives.


Facebook, Google, Microsoft, IBM and Amazon partner to solve AI's ethical problem

#artificialintelligence

Artificial intelligence is becoming ubiquitous. As its reach grows and it becomes ingrained in consumer products and services, elements of control and regulation are required. Silicon Valley's biggest companies are joining forces to introduce them. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way. Each member of the Partnership on AI will contribute financial and research resources.